Optimisation of pipeline route in the presence of obstacles based on a least cost path algorithm and laplacian smoothing
Subsea pipeline route design is a crucial task for the offshore oil and gas industry, and the selected route can significantly affect the success or failure of an offshore project. Thus, it is essential to design pipeline routes that are eco-friendly, economical and safe. Obstacle avoidance is one of the main problems affecting pipeline route selection. In this study, we propose a technique for automatic obstacle avoidance in pipeline route design. A Laplacian smoothing algorithm was used to make the automatically generated pipeline routes fairer. The algorithms were fast, and the method was shown to be effective and easy to use in a simple set of case studies.
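The abstract does not give the smoothing formulation; as an illustration, here is a minimal sketch of classical Laplacian smoothing on a 2-D polyline route, with hypothetical names (each interior vertex is pulled toward the average of its two neighbours while the endpoints stay fixed):

```python
def laplacian_smooth(route, iterations=10, alpha=0.5):
    """Smooth a 2-D polyline by moving each interior point a fraction
    alpha toward the average of its two neighbours; endpoints fixed."""
    pts = [tuple(p) for p in route]
    for _ in range(iterations):
        new = [pts[0]]
        for i in range(1, len(pts) - 1):
            ax, ay = pts[i - 1]
            bx, by = pts[i + 1]
            x, y = pts[i]
            new.append((x + alpha * ((ax + bx) / 2 - x),
                        y + alpha * ((ay + by) / 2 - y)))
        new.append(pts[-1])
        pts = new
    return pts

# A zig-zag route becomes fairer while keeping its endpoints.
route = [(0, 0), (1, 1), (2, -1), (3, 1), (4, 0)]
smoothed = laplacian_smooth(route)
```

Repeated averaging damps the high-frequency zig-zag while the fixed endpoints keep the route anchored to its start and end points.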
Application of morphing technique with mesh-merging in rapid hull form generation
Morphing is a geometric interpolation technique, often used by the animation industry, that transforms one form into another seemingly seamlessly. It does this by producing a large number of 'intermediate' forms between the two 'extreme' or 'parent' forms. It has already been shown that the morphing technique can be a powerful tool for form design and, as such, a useful addition to the armoury of product designers. The morphing procedure itself is simple and consists of straightforward linear interpolation. However, establishing the correspondence between the vertices of the parent models is one of the most difficult and important tasks in a morphing process. This paper discusses the mesh-merging method employed for this process, as against the already established mesh-regularising method. It has been found that the merging method minimises the need for manual manipulation, allowing automation to a large extent.
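The linear interpolation at the heart of morphing is simple enough to sketch. The toy example below assumes vertex correspondence has already been established, which is exactly the hard part the paper addresses with mesh merging; names are illustrative:

```python
def morph(parent_a, parent_b, t):
    """Linearly interpolate between two meshes given as corresponding
    vertex lists; t=0 gives parent_a, t=1 gives parent_b."""
    assert len(parent_a) == len(parent_b), "vertex correspondence required"
    return [tuple(a + t * (b - a) for a, b in zip(va, vb))
            for va, vb in zip(parent_a, parent_b)]

# Ten forms between two (tiny, hypothetical) parent hulls.
hull_a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.5)]
hull_b = [(0.0, 0.2, 0.0), (1.2, 0.1, 0.6)]
intermediates = [morph(hull_a, hull_b, k / 9) for k in range(10)]
```

Once every vertex of one parent is paired with a vertex of the other, each intermediate form is a single vectorised interpolation; all the real difficulty lives in building that pairing.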
NETS: Extremely fast outlier detection from a data stream via set-based processing
This paper addresses the problem of efficiently detecting outliers from a data stream as old data points expire from, and new data points enter, the window incrementally. The proposed method is based on a newly discovered characteristic of data streams: the change in the locations of data points in the data space is typically very insignificant. This observation has led to the finding that existing distance-based outlier detection algorithms perform excessive unnecessary computations that are repetitive and/or cancel out each other's effects. Thus, in this paper, we propose a novel set-based approach to detecting outliers, whereby data points at similar locations are grouped and the detection of outliers or inliers is handled at the group level. Specifically, a new algorithm, NETS, is proposed to achieve a remarkable performance improvement by realizing set-based early identification of outliers or inliers and taking advantage of the net effect between expired and new data points. Additionally, NETS achieves the same efficiency even for high-dimensional data streams through two-level dimensional filtering. Comprehensive experiments using six real-world data streams show 5 to 25 times faster processing than state-of-the-art algorithms with comparable memory consumption. We assert that NETS opens a new possibility for real-time data stream outlier detection.
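NETS's sliding-window bookkeeping and net-effect computation are beyond the abstract, but the set-based idea can be sketched in a static setting. Assuming the standard distance-based definition (a point is an inlier if it has at least k neighbours within distance R), grouping points into grid cells of side R/√2 lets a dense cell be declared all-inlier without any distance computations, since any two points in the same cell are within R of each other. Names are hypothetical:

```python
import math
from collections import defaultdict

def detect_outliers(points, R, k):
    """Distance-based outlier detection with set-based early pruning:
    a point is an inlier if it has at least k neighbours within R.
    2-D points are grouped into grid cells of side R/sqrt(2); a cell
    holding more than k points is all-inlier without distance checks."""
    side = R / math.sqrt(2)
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // side), int(p[1] // side))].append(p)

    outliers = []
    for members in cells.values():
        if len(members) > k:   # set-based early inlier identification
            continue
        for p in members:      # only sparse cells need per-point checks
            neighbours = sum(1 for q in points
                             if q is not p and math.dist(p, q) <= R)
            if neighbours < k:
                outliers.append(p)
    return outliers

cluster = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]
lone = (5.0, 5.0)
print(detect_outliers(cluster + [lone], R=1.0, k=3))  # → [(5.0, 5.0)]
```

The full algorithm additionally maintains these groups incrementally as the window slides, cancelling the effects of expired points against those of newly arrived ones.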
Noise Robust Pitch Tracking by Subband Autocorrelation Classification
Speech pitch tracking is one of the elementary tasks of Computational Auditory Scene Analysis (CASA). While a human can easily perceive the voiced pitch in highly noisy recordings, the performance of automatic speech pitch tracking degrades in unknown noisy audio conditions. Traditional pitch trackers use either autocorrelation or the Fourier transform to calculate periodicity, which works well for clean recordings. For noisy recordings, however, the accuracy of these pitch trackers generally degrades. For example, the information in parts of the frequency spectrum may be lost due to analog radio band transmission and/or contain additive noise of various kinds. Instead of explicitly using the most obvious features of autocorrelation, we propose a trained classifier-based approach, which we call Subband Autocorrelation Classification (SAcC). A multi-layer perceptron (MLP) classifier is trained on the principal components of the autocorrelations of subbands from an auditory filterbank. The output of the MLP classifier is temporally smoothed to produce the pitch track by finding the Viterbi path of a Hidden Markov Model (HMM). Training on various types of noisy speech recordings leads to a great increase in performance over state-of-the-art algorithms, according to both the traditional Gross Pitch Error (GPE) measure and a proposed novel Pitch Tracking Error (PTE), which more fully reflects the accuracy of both pitch estimation/extraction and voicing detection in a single measure. To verify the generalization and specificity of SAcC, we test it on a real-world problem with a large-scale noisy speech corpus: data from the DARPA Robust Automatic Transcription of Speech (RATS) program. The experiments confirm the generalization power of SAcC across various unknown noise conditions and distinct speech corpora.
We also report that using the SAcC output adds a significant improvement to a Speaker Identification (SID) system for RATS, suggesting the potential contribution of SAcC pitch tracking to higher-level tasks.
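For context, the traditional autocorrelation pitch tracker that SAcC improves on can be sketched as follows; SAcC replaces the peak-picking below with a trained MLP over subband autocorrelations and HMM smoothing. Names are illustrative:

```python
import math

def autocorr_pitch(signal, sr, min_hz=80, max_hz=400):
    """Estimate pitch as the lag with maximal autocorrelation,
    searched over lags corresponding to [min_hz, max_hz]."""
    lo, hi = int(sr / max_hz), int(sr / min_hz)
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, hi + 1):
        # Autocorrelation at this lag over the available samples.
        r = sum(signal[i] * signal[i + lag]
                for i in range(len(signal) - lag))
        if r > best_r:
            best_lag, best_r = lag, r
    return sr / best_lag

# A clean 200 Hz tone sampled at 8 kHz: the peak lag is one period.
sr = 8000
tone = [math.sin(2 * math.pi * 200 * n / sr) for n in range(800)]
print(round(autocorr_pitch(tone, sr)))  # → 200
```

On clean periodic signals this works well; on noisy or band-limited recordings the autocorrelation peaks become unreliable, which is the failure mode SAcC's learned classifier targets.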
Progressive Processing of Continuous Range Queries in Hierarchical Wireless Sensor Networks
In this paper, we study the problem of processing continuous range queries in
a hierarchical wireless sensor network. In contrast to the traditional
approach of building networks in a "flat" structure using sensor devices of
the same capability, the hierarchical approach deploys devices of higher
capability in a higher tier, i.e., a tier closer to the server. While query
processing in flat sensor networks has been widely studied, query processing
in hierarchical sensor networks has received comparatively little attention.
In wireless sensor networks, the main costs to consider are the energy for
sending data and the storage for storing queries, and there is a trade-off
between these two costs.
Based on this, we first propose a progressive processing method that
effectively processes a large number of continuous range queries in
hierarchical sensor networks. The proposed method uses the query merging
technique proposed by Xiang et al. as the basis and additionally considers the
trade-off between the two costs. More specifically, it works toward reducing
the storage cost at lower-tier nodes by merging more queries, and toward
reducing the energy cost at higher-tier nodes by merging fewer queries (thereby
reducing "false alarms"). We then present how to build a hierarchical sensor
network that is optimal with respect to the weighted sum of the two costs. It
allows for a cost-based, systematic control of the trade-off based on the
relative importance of storage and energy in a given network
environment and application. Experimental results show that the proposed
method achieves a near-optimal balance between storage and energy and reduces
the cost by factors of 0.989 to 84.995 compared with the cost achieved by the
flat (i.e., non-hierarchical) setup in the work by Xiang et al.
Comment: 41 pages, 20 figures
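The paper's merging follows Xiang et al. over multidimensional sensor ranges; the storage/energy trade-off can be illustrated with one-dimensional ranges. Merging more aggressively stores fewer queries, but the merged range also covers gaps no query asked for, causing false alarms that cost transmission energy. A hypothetical sketch:

```python
def merge_ranges(queries, max_gap):
    """Merge overlapping/nearby 1-D range queries: fewer stored ranges
    (lower storage cost) at the price of monitoring gaps no query
    asked for (more false alarms, higher energy cost)."""
    merged = []
    for lo, hi in sorted(queries):
        if merged and lo - merged[-1][1] <= max_gap:
            merged[-1][1] = max(merged[-1][1], hi)  # absorb into last range
        else:
            merged.append([lo, hi])
    return [tuple(r) for r in merged]

queries = [(0, 10), (12, 20), (50, 60)]
# Aggressive merging, as at a lower tier: fewer ranges to store.
print(merge_ranges(queries, max_gap=5))   # → [(0, 20), (50, 60)]
# No merging, as at a higher tier: no false alarms for values in (10, 12).
print(merge_ranges(queries, max_gap=-1))  # → [(0, 10), (12, 20), (50, 60)]
```

Tuning how much merging happens per tier is, in spirit, the cost-based control the paper formalizes via a weighted sum of the two costs.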
Improving the Vertical Accuracy of Indoor Positioning for Emergency Communication
Emergency communication systems are undergoing a transition from the PSTN-based legacy system to an IP-based next-generation system. In the next-generation system, GPS accurately provides a user's location when the user makes an emergency call outdoors using a mobile phone. Indoor positioning, however, presents a challenge because GPS does not generally work indoors. Moreover, unlike outdoors, vertical accuracy is critical indoors because an error of a few meters can send emergency responders to the wrong floor of a building. This paper presents an indoor positioning system that focuses on improving the accuracy of vertical location. We aim to provide floor-level accuracy with minimal infrastructure support. Our approach is to use multiple sensors available in today's smartphones to trace users' vertical movements inside buildings. We make three contributions. First, we present the elevator module for tracking a user's movement in elevators; it addresses three core challenges that make it difficult to accurately derive displacement from acceleration. Second, we present the stairway module, which determines the number of floors a user has traveled on foot. Unlike previous systems that track users' footsteps, our stairway module uses a novel landing-counting technique. Third, we present a hybrid architecture that combines the sensor-based components with minimal and practical infrastructure, which provides an initial anchor and periodic corrections of a user's vertical location indoors. The architecture strikes the right balance between the accuracy of location and the feasibility of deployment for the purpose of emergency communication.
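The abstract does not detail the elevator module, but the naive baseline it must improve on, double-integrating accelerometer samples into vertical displacement, is easy to sketch and shows why sensor bias is a core challenge: any constant bias in the acceleration grows quadratically in the displacement. Names are illustrative:

```python
def vertical_displacement(accels, dt):
    """Double-integrate vertical acceleration samples (gravity assumed
    already removed) into displacement with the trapezoidal rule."""
    v = d = 0.0
    prev_a = accels[0]
    for a in accels[1:]:
        v_next = v + (prev_a + a) / 2 * dt   # integrate accel -> velocity
        d += (v + v_next) / 2 * dt           # integrate velocity -> position
        v, prev_a = v_next, a
    return d

# 1 m/s^2 held for one second: velocity ends at 1 m/s, displacement 0.5 m.
d = vertical_displacement([1.0] * 101, dt=0.01)
print(round(d, 3))  # → 0.5
```

With real sensor data, a bias of even 0.05 m/s² would drift the estimate by metres within seconds, enough to mistake the floor, which is why the paper's elevator module needs corrective techniques rather than raw integration.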
Deep Semi-supervised Anomaly Detection with Metapath-based Context Knowledge
Graph anomaly detection has attracted considerable attention in recent years.
This paper introduces a novel approach that leverages metapath-based
semi-supervised learning, addressing the limitations of previous methods. We
present a new framework, Metapath-based Semi-supervised Anomaly Detection
(MSAD), incorporating GCN layers in both the encoder and decoder to efficiently
propagate context information between abnormal and normal nodes. The design of
metapath-based context information and a specifically crafted anomaly community
enhance the process of learning differences in structures and attributes, both
globally and locally. Through a comprehensive set of experiments conducted on
seven real-world networks, this paper demonstrates the superiority of the
MSAD method over state-of-the-art techniques. The promising results of this
study pave the way for future investigations focusing on the optimization and
analysis of metapath patterns to further enhance the effectiveness of anomaly
detection on attributed networks.
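MSAD's GCN encoder/decoder and metapath construction cannot be reproduced from the abstract alone, but the underlying intuition (anomalous nodes deviate from their structural context) can be sketched with a toy, hypothetical reconstruction score:

```python
def neighbour_reconstruction_scores(adj, feats):
    """Score each node by how far its attributes deviate from the mean
    of its neighbours' attributes: a crude stand-in for the learned
    reconstruction error of a GCN-based encoder/decoder detector."""
    scores = []
    for i, fi in enumerate(feats):
        nbrs = [feats[j] for j in adj[i]]
        if not nbrs:
            scores.append(0.0)
            continue
        mean = [sum(col) / len(nbrs) for col in zip(*nbrs)]
        scores.append(sum((a - b) ** 2 for a, b in zip(fi, mean)) ** 0.5)
    return scores

# Node 3 has attributes unlike its neighbourhood.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
feats = [[1.0, 1.0], [1.1, 0.9], [0.9, 1.1], [5.0, 5.0]]
scores = neighbour_reconstruction_scores(adj, feats)
print(max(range(4), key=scores.__getitem__))  # → 3
```

A learned model goes further by propagating information over multiple hops (and, in MSAD, over metapaths) instead of a single fixed neighbourhood average.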
Graph Anomaly Detection with Graph Neural Networks: Current Status and Challenges
Graphs are used widely to model complex systems, and detecting anomalies in a
graph is an important task in the analysis of complex systems. Graph anomalies
are patterns in a graph that do not conform to normal patterns expected of the
attributes and/or structures of the graph. In recent years, graph neural
networks (GNNs) have been studied extensively and have successfully performed
difficult machine learning tasks in node classification, link prediction, and
graph classification, thanks to the highly expressive capability of message
passing in effectively learning graph representations. To solve the graph
anomaly detection problem, GNN-based methods leverage information about the
graph attributes (or features) and/or structures to learn to score anomalies
appropriately. In this survey, we review the recent advances made in detecting
graph anomalies using GNN models. Specifically, we summarize GNN-based methods
according to the graph type (i.e., static and dynamic), the anomaly type (i.e.,
node, edge, subgraph, and whole graph), and the network architecture (e.g.,
graph autoencoder, graph convolutional network). To the best of our knowledge,
this survey is the first comprehensive review of graph anomaly detection
methods based on GNNs.
Comment: 9 pages, 2 figures, 1 table; to appear in IEEE Access (please cite
the journal version).